Fixed Point Iteration with Inexact Function Values
Authors
Abstract
In many iterative schemes, the precision of each step depends on the computational effort spent on that step. A method of specifying a suitable amount of computation at each step is described. The approach is adaptive and aimed at minimizing the overall computational cost subject to attaining a final iterate that satisfies a suitable error criterion. General and particular cost functions are considered, and a numerical example is given.
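As a rough illustration of the idea (not the paper's cost model or notation), the Python sketch below runs a fixed-point iteration x_{k+1} = g(x_k) in which g is evaluated only to a per-step tolerance that starts loose and is tightened as the iteration proceeds; the inexact evaluation is a truncated Taylor series for cos, whose cost grows as the tolerance shrinks. The names cos_taylor and inexact_fixed_point, the geometric tightening rule, and the stopping test are all illustrative assumptions.

    def cos_taylor(x, tol):
        """Approximate cos(x) by summing Taylor terms until the next term is
        below tol; a looser tolerance means fewer terms, i.e. less work."""
        term, total, n = 1.0, 1.0, 0
        while abs(term) > tol:
            n += 1
            term *= -x * x / ((2 * n - 1) * (2 * n))
            total += term
        return total

    def inexact_fixed_point(g_approx, x0, final_tol=1e-6, theta=0.5, max_iter=200):
        """Fixed-point iteration x_{k+1} ~ g(x_k) with inexact evaluations.
        g_approx(x, tol) returns g(x) to within tol.  Early steps use a loose
        (cheap) tolerance, which is then tightened geometrically; this is an
        illustrative rule, not the paper's cost-optimal choice."""
        x, tol_k = x0, 1e-2
        for _ in range(max_iter):
            x_new = g_approx(x, tol_k)
            # heuristic stopping test: step size plus evaluation error
            if abs(x_new - x) + tol_k <= final_tol:
                return x_new
            x = x_new
            tol_k = max(theta * tol_k, 0.01 * final_tol)  # spend more effort as we converge
        return x

    # Fixed point of cos (the Dottie number, about 0.739085)
    print(inexact_fixed_point(cos_taylor, x0=1.0))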
Similar Works
Termination criteria for inexact fixed-point schemes
We analyze inexact fixed-point iterations in which the generating function involves an inexact solve of an equation system, and ask how the tolerances for the inner solves influence the iteration error of the outer fixed-point iteration. Important applications are the Picard iteration and partitioned fluid-structure interaction. We prove that the iteration converges irrespective of ho...
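A minimal sketch of the inner/outer structure this abstract describes, with all names and the particular tolerance rule chosen here for illustration: the outer Picard iteration is x_{k+1} = A^{-1} b(x_k), the inner solve is a plain Jacobi loop stopped at a tolerance, and that inner tolerance is slaved to the size of the latest outer step. This is a generic heuristic, not the termination criterion derived in the paper.

    import numpy as np

    def jacobi_solve(A, rhs, x0, tol, max_sweeps=500):
        """Inner solver: Jacobi sweeps, stopped once the residual norm is
        below tol.  A looser tolerance means fewer sweeps (less work)."""
        D = np.diag(A)
        x = x0.copy()
        for _ in range(max_sweeps):
            r = rhs - A @ x
            if np.linalg.norm(r) <= tol:
                break
            x = x + r / D
        return x

    def inexact_picard(A, b_of_x, x0, outer_tol=1e-8, safety=1e-2, max_outer=100):
        """Outer Picard iteration x_{k+1} = A^{-1} b(x_k) with inexact inner
        solves; the inner tolerance shrinks with the outer step size."""
        x, eta = x0, 1e-1
        for _ in range(max_outer):
            x_new = jacobi_solve(A, b_of_x(x), x, tol=eta)
            step = np.linalg.norm(x_new - x)
            if step + eta <= outer_tol:          # outer test charges for the inner error
                return x_new
            eta = max(safety * step, 0.1 * outer_tol)
            x = x_new
        return x

    # Toy contraction: solve A x = b(x) with a mildly x-dependent right-hand side.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b_of_x = lambda x: np.array([1.0, 2.0]) + 0.1 * np.sin(x)
    print(inexact_picard(A, b_of_x, x0=np.zeros(2)))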
Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming
In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programming...
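The "inexact Newton direction" ingredient can be sketched in a much plainer setting than the paper's: below, each Newton system is solved by conjugate gradients only to a relative residual eta (a forcing term) rather than exactly. The unconstrained quadratic test problem, the fixed eta = 0.5, and all names are illustrative assumptions; this is not the interior-point algorithm for symmetric cone programs.

    import numpy as np

    def cg(A, b, tol, max_iter=200):
        """Conjugate gradients for SPD A, stopped once ||b - A x|| <= tol,
        so the Newton system is solved only approximately."""
        x = np.zeros_like(b)
        r, p = b.copy(), b.copy()
        rs = r @ r
        for _ in range(max_iter):
            if np.sqrt(rs) <= tol:
                break
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs, rs_old = r @ r, rs
            p = r + (rs / rs_old) * p
        return x

    def inexact_newton(grad, hess, x0, eta=0.5, tol=1e-10, max_iter=60):
        """Inexact Newton iteration: each system H s = -g is solved only to
        residual eta * ||g|| (the forcing term), trading accuracy for cost."""
        x = x0
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            s = cg(hess(x), -g, tol=eta * np.linalg.norm(g))
            x = x + s
        return x

    # Convex quadratic test problem: minimize 0.5 x'Qx - c'x.
    Q = np.array([[5.0, 1.0], [1.0, 4.0]])
    c = np.array([1.0, 2.0])
    x_star = inexact_newton(lambda x: Q @ x - c, lambda x: Q, np.zeros(2))
    print(x_star, np.linalg.solve(Q, c))   # the two should agree closely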
A New Inexact Inverse Subspace Iteration for Generalized Eigenvalue Problems
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx [Q. Ye and P. Zhang, Inexact inverse subspace iteration for generalized eigenvalue problems, Linear Algebra and its Applications, 434 (2011) 1697-1715]. In particular, the linear convergence property of the inverse subspace iteration is preserved.
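A single-vector sketch of the same idea, with illustrative names and constants: inverse iteration for the eigenvalue of Ax = λBx of smallest magnitude, where the inner system A y = B x is solved by conjugate gradients only up to a tolerance that is tightened as the outer iteration proceeds. The cited paper works with whole subspaces and gives a precise tolerance rule; none of that is reproduced here.

    import numpy as np

    def cg(A, b, x0, tol, max_iter=500):
        """Conjugate gradients for SPD A, stopped at ||b - A x|| <= tol."""
        x = x0.copy()
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            if np.sqrt(rs) <= tol:
                break
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs, rs_old = r @ r, rs
            p = r + (rs / rs_old) * p
        return x

    def inexact_inverse_iteration(A, B, x0, n_outer=30, tol0=1e-1, shrink=0.5):
        """Inverse iteration for A x = lambda B x with inexact inner solves:
        A y = B x is solved only to a tolerance, tightened geometrically."""
        x = x0 / np.linalg.norm(x0)
        tol = tol0
        for _ in range(n_outer):
            y = cg(A, B @ x, x, tol)           # inexact solve of A y = B x
            x = y / np.linalg.norm(y)
            tol *= shrink                      # demand more accuracy as we converge
        lam = (x @ (A @ x)) / (x @ (B @ x))    # Rayleigh quotient
        return lam, x

    # 1-D Laplacian test pencil with B = I; the smallest eigenvalue is sought.
    n = 20
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    B = np.eye(n)
    lam, _ = inexact_inverse_iteration(A, B, np.ones(n))
    print(lam, np.linalg.eigvalsh(A)[0])       # the two should agree closely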
A Fixed-Point of View on Gradient Methods for Big Data
Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive data sets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks...
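The fixed-point view in a nutshell, on an illustrative strongly convex quadratic (none of the symbols below come from the paper): a gradient step is the map T(x) = x - α∇f(x), and for 0 < α < 2/L this map is a contraction whose unique fixed point is the minimizer, so plain Picard iteration of T converges linearly.

    import numpy as np

    # f(x) = 0.5 x'Qx - b'x with SPD Q, so mu and L are the extreme eigenvalues of Q.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    grad_f = lambda x: Q @ x - b

    L = np.linalg.eigvalsh(Q)[-1]
    alpha = 1.0 / L                      # step size in (0, 2/L)
    T = lambda x: x - alpha * grad_f(x)  # gradient step as a fixed-point map

    x = np.zeros(2)
    for _ in range(200):                 # plain Picard (fixed-point) iteration
        x = T(x)

    print(x, np.linalg.solve(Q, b))      # fixed point of T == minimizer Q^{-1} b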
Solving time-fractional chemical engineering equations by modified variational iteration method as fixed point iteration method
The variational iteration method (VIM) was extended to find approximate solutions of fractional chemical engineering equations. The Lagrange multipliers of the VIM were not identified explicitly. In this paper we improve the VIM by using the concept of the fixed-point iteration method. The method was then implemented to solve the system of time-fractional chemical engineering equations. The ob...
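For the classical (integer-order) special case, the fixed-point reading is easy to exhibit: with Lagrange multiplier λ = -1, the VIM correction functional for u' = u, u(0) = 1 reduces to the Picard map u_{n+1}(t) = 1 + ∫₀ᵗ u_n(s) ds, whose iterates build up the Taylor series of eᵗ. The fractional chemical-engineering systems of the paper require the fractional integral operator instead and are not treated in this small sketch.

    import sympy as sp

    t, s = sp.symbols('t s')
    u = sp.Integer(1)                    # u_0(t) = 1 satisfies the initial condition
    for _ in range(6):
        # fixed-point map: u_{n+1}(t) = 1 + integral_0^t u_n(s) ds
        u = 1 + sp.integrate(u.subs(t, s), (s, 0, t))

    print(sp.expand(u))                  # degree-6 Taylor polynomial of exp(t)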